DiscoverAI Security OpsAzure AI Foundry Guardrails | Episode 27
Azure AI Foundry Guardrails | Episode 27

Azure AI Foundry Guardrails | Episode 27

Update: 2025-10-30
Share

Description

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – 

https://poweredbybhis.com


Azure AI Foundry Guardrails | Episode 27

In this episode of BHIS Presents: AI Security Ops, we explore how to configure content filters for AI models using the Azure AI Fooundry guardrails and controls interface. Whether you're building secure demos or deploying models in production, this walkthrough shows how to block unwanted content, enforce policy, and maintain compliance.

Topics Covered:

  •  Changing default filters for demo compliance
  •  Setting up a system prompt and understanding its role
  •  Adding regex terms to block specific content
  •  Creating and configuring a custom filter: “tech demo guardrails”
  •  Input-side filtering: inspecting user text before model access
  •  Safety vs. security categories in filtering
  •  Enabling prompt shields for indirect jailbreak detection


This video is ideal for developers, security engineers, and anyone working with AI systems who needs to implement layered defenses and ensure responsible model behavior.


Why This Matters
By implementing layered security—block lists, input and output filters—you protect sensitive data, comply with policy, and maintain a safe user experience.

#AIsecurity #GuardrailsAndControls #ContentFiltering #PromptSecurity #RegexFiltering #BHIS #AIModelSafety #SystemPromptSecurity

Brought to you by Black Hills Information Security 

https://www.blackhillsinfosec.com


----------------------------------------------------------------------------------------------

Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/


  • (00:00 ) - Introduction & Overview

  • (01:17 ) - Changing the Default Content Filter for Demo Compliance

  • (02:00 ) - Setting Up a System Prompt and Its Purpose

  • (04:26 ) - Adding a New Term (“dogs”) to the Content Filter (Regex Example)

  • (05:04 ) - Creating and Configuring a Content Filter Named “Tech Demo Guardrails”

  • (05:35 ) - How Input-Side Filters Inspect and Block Unwanted Content

  • (06:01 ) - Overview of Safety Categories vs. Security Categories

  • (07:15 ) - Enabling Prompt Shields for Indirect Jailbreak Detection (Not Used in Demo)

  • (08:30 ) - Summary & Next Steps

Comments 
00:00
00:00
x

0.5x

0.8x

1.0x

1.25x

1.5x

2.0x

3.0x

Sleep Timer

Off

End of Episode

5 Minutes

10 Minutes

15 Minutes

30 Minutes

45 Minutes

60 Minutes

120 Minutes

Azure AI Foundry Guardrails | Episode 27

Azure AI Foundry Guardrails | Episode 27

Black Hills Information Security